    Preamble design using embedded signalling for OFDM broadcast systems based on reduced-complexity distance detection

    The second-generation digital terrestrial television broadcasting standard (DVB-T2) adopts the so-called P1 symbol as the preamble for initial synchronization. The P1 symbol also carries a number of basic transmission parameters, including the fast Fourier transform size and the single-input/single-output as well as multiple-input/single-output mode, in order to appropriately configure the receiver for the subsequent processing. In this contribution, an improved preamble design is proposed, where a pair of training sequences is inserted in the frequency domain and the distance between them is used for transmission parameter signalling. At the receiver, only a low-complexity correlator is required to detect the signalling. Both the coarse carrier frequency offset and the signalling can be estimated simultaneously from this correlation. Compared to the standardised P1 symbol, the proposed preamble design significantly reduces receiver complexity while retaining high robustness in frequency-selective fading channels. Furthermore, we demonstrate that the proposed preamble design achieves better signalling performance than the standardised P1 symbol, despite reducing the number of multiplications and additions by about 40% and 20%, respectively.
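    The distance-signalling idea described above can be sketched in a few lines: embed two copies of a known training sequence in the frequency domain, let their spacing encode the signalling value, and recover that value at the receiver with a single sliding correlator. This is a minimal, hypothetical illustration only; the sequence, FFT size, and distance mapping below are invented for the sketch and are not the DVB-T2 parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    NFFT = 256
    SEQ_LEN = 16
    # Hypothetical mapping: signalling value -> spacing between the two copies
    DISTANCES = {0: 40, 1: 56, 2: 72, 3: 88}

    seq = rng.choice([-1.0, 1.0], SEQ_LEN)  # known training sequence (BPSK)

    def build_preamble(signalling, base=20):
        """Insert two copies of the known sequence in the frequency domain,
        separated by a distance that encodes the signalling value."""
        d = DISTANCES[signalling]
        X = np.zeros(NFFT)
        X[base:base + SEQ_LEN] = seq
        X[base + d:base + d + SEQ_LEN] = seq
        return X

    def detect(X):
        """Slide the known sequence over the spectrum (one low-complexity
        correlator) and read the signalling from the peak spacing."""
        corr = np.array([X[k:k + SEQ_LEN] @ seq for k in range(NFFT - SEQ_LEN)])
        p1 = int(np.argmax(corr))
        corr[max(0, p1 - SEQ_LEN):p1 + SEQ_LEN] = 0  # mask the first peak
        p2 = int(np.argmax(corr))
        spacing = abs(p2 - p1)
        # A common shift of both peaks (integer carrier frequency offset)
        # leaves the spacing unchanged, which is why CFO and signalling
        # can be separated from the same correlation.
        return min(DISTANCES, key=lambda s: abs(DISTANCES[s] - spacing))
    ```

    Because only the spacing of the two correlation peaks carries the signalling, an integer subcarrier shift of the whole preamble moves both peaks together and can be read off as the coarse frequency offset.
    
    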

    Knowledge-Aided STAP Using Low Rank and Geometry Properties

    This paper presents knowledge-aided space-time adaptive processing (KA-STAP) algorithms that exploit the low-rank dominant clutter and the array geometry properties (LRGP) for airborne radar applications. The core idea is to exploit the fact that the clutter subspace is determined only by the space-time steering vectors, where the Gram-Schmidt orthogonalization approach is employed to compute the clutter subspace. Specifically, for a side-looking uniformly spaced linear array, the algorithm first selects a group of linearly independent space-time steering vectors using LRGP that can represent the clutter subspace. By performing the Gram-Schmidt orthogonalization procedure, the orthogonal bases of the clutter subspace are obtained, followed by two approaches to compute the STAP filter weights. To overcome the performance degradation caused by non-ideal effects, a KA-STAP algorithm that incorporates a covariance matrix taper (CMT) is proposed. For practical applications, a reduced-dimension version of the proposed KA-STAP algorithm is also developed. The simulation results illustrate the effectiveness of our proposed algorithms, and show that the proposed algorithms converge rapidly and provide an SINR improvement over existing methods when using a very small number of snapshots. Comment: 16 figures, 12 pages. IEEE Transactions on Aerospace and Electronic Systems, 201
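    The subspace construction described above can be illustrated with a toy example: build space-time steering vectors for a side-looking uniform linear array (where clutter Doppler is proportional to spatial frequency), orthogonalize them, and project the target steering vector away from the clutter subspace. This is a minimal sketch of the projection idea under invented toy dimensions, not the paper's full algorithm; QR factorization is used here as a numerically stable equivalent of Gram-Schmidt.

    ```python
    import numpy as np

    N, M = 4, 4  # array elements, pulses (toy sizes)

    def st_steering(fs, fd):
        """Space-time steering vector: Kronecker product of the temporal
        and spatial steering vectors."""
        a = np.exp(2j * np.pi * fs * np.arange(N))  # spatial
        b = np.exp(2j * np.pi * fd * np.arange(M))  # temporal
        return np.kron(b, a)

    # Side-looking ULA: clutter Doppler frequency equals spatial frequency,
    # so one steering vector per clutter patch direction on a coarse grid.
    fs_grid = np.linspace(-0.4, 0.4, 7)
    V = np.stack([st_steering(fs, fs) for fs in fs_grid], axis=1)  # NM x K

    # QR factorization (equivalent to Gram-Schmidt) yields orthonormal
    # bases of the clutter subspace.
    Q, _ = np.linalg.qr(V)

    # Project the target steering vector onto the clutter-orthogonal
    # complement to form a simple clutter-suppressing weight vector.
    target = st_steering(0.2, -0.3)
    w = target - Q @ (Q.conj().T @ target)
    w /= np.linalg.norm(w)
    ```

    Any steering vector lying in the spanned clutter subspace is annihilated by the projection, which is the mechanism behind the rapid convergence with few snapshots: the subspace comes from geometry, not from sample statistics.
    
    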

    Improving the Performance of R17 Type-II Codebook with Deep Learning

    The Type-II codebook in Release 17 (R17) exploits the angular-delay-domain partial reciprocity between uplink and downlink channels to select a subset of angular-delay-domain ports for measuring and feeding back the downlink channel state information (CSI), where the performance of existing deep learning enhanced CSI feedback methods is limited due to the deficiency of sparse structures. To address this issue, we propose two new perspectives on adopting deep learning to improve the R17 Type-II codebook. First, considering the low signal-to-noise ratio of uplink channels, deep learning is utilized to accurately select the dominant angular-delay-domain ports, where the focal loss is harnessed to solve the class imbalance problem. Second, we propose to adopt deep learning to reconstruct the downlink CSI at the base station based on the feedback of the R17 Type-II codebook, where the information of sparse structures can be effectively leveraged. Besides, a weighted shortcut module is designed to facilitate accurate reconstruction. Simulation results demonstrate that our proposed methods improve the sum rate performance compared with the traditional R17 Type-II codebook and deep learning benchmarks. Comment: Accepted by IEEE GLOBECOM 2023, conference version of arXiv:2305.0808
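    The class-imbalance remedy mentioned above, the focal loss, can be sketched directly: it scales the cross-entropy of each example by a factor that vanishes for confident, correct predictions, so the few dominant (active) ports are not swamped by the many inactive ones. The sketch below is a generic binary focal loss, not the paper's exact training objective; the probabilities and labels are illustrative.

    ```python
    import numpy as np

    def focal_loss(p, y, gamma=2.0, alpha=0.25):
        """Binary focal loss: cross-entropy scaled by (1 - p_t)^gamma,
        which down-weights easy examples so the rare positive class
        (dominant ports) drives the gradient."""
        p = np.clip(p, 1e-7, 1 - 1e-7)
        pt = np.where(y == 1, p, 1 - p)          # prob. of the true class
        a = np.where(y == 1, alpha, 1 - alpha)   # class-balancing weight
        return float(np.mean(-a * (1 - pt) ** gamma * np.log(pt)))

    # One active port among many inactive ones: easy, confident predictions
    # contribute far less loss than hard, uncertain ones.
    y = np.array([1, 0, 0, 0])
    easy = focal_loss(np.array([0.95, 0.05, 0.05, 0.05]), y)
    hard = focal_loss(np.array([0.55, 0.45, 0.45, 0.45]), y)
    ```

    With gamma = 0 and alpha = 0.5 the expression reduces to (half) the ordinary binary cross-entropy, so gamma directly controls how aggressively easy negatives are suppressed.
    
    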

    An Assessment of Anthropogenic CO_2 Emissions by Satellite-Based Observations in China

    Carbon dioxide (CO_2) is the most important anthropogenic greenhouse gas, and its concentration in the atmosphere has been increasing rapidly due to growing anthropogenic CO_2 emissions. Quantifying anthropogenic CO_2 emissions is essential for evaluating measures to mitigate climate change. Satellite-based measurements of greenhouse gases have greatly advanced the monitoring of atmospheric CO_2 concentration. In this study, we propose an approach for estimating anthropogenic CO_2 emissions in China with an artificial neural network using the column-averaged dry-air mole fraction of CO_2 (XCO_2) derived from observations of the Greenhouse gases Observing SATellite (GOSAT). First, we use annual XCO_2 anomalies (dXCO_2) derived from XCO_2 and anthropogenic emission data during 2010–2014 as the training dataset to build a General Regression Neural Network (GRNN) model. Second, applying the built model to the annual dXCO_2 in 2015, we estimate the corresponding emissions and verify them against the ODIAC emission inventory. As a result, the estimated emissions show a significant positive correlation with the ODIAC CO_2 emissions, especially in areas with high anthropogenic CO_2 emissions. Our results indicate that XCO_2 data from satellite observations can be applied to estimating anthropogenic CO_2 emissions at the regional scale by machine learning. The developed method can estimate carbon emission inventories in a data-driven way. In particular, the estimation accuracy is expected to improve further when combined with other satellite-derived data sources related to CO_2 uptake and emissions.
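    The GRNN used above is a one-pass kernel regressor: it memorizes the training pairs and predicts a query as a Gaussian-distance-weighted average of the stored targets, with a single smoothing parameter sigma. The sketch below shows that mechanism on an invented scalar toy problem (a dXCO_2-like feature mapped to an emission-like value); the data and sigma are illustrative, not the study's configuration.

    ```python
    import numpy as np

    class GRNN:
        """General Regression Neural Network: store the training pairs and
        predict as a Gaussian-kernel-weighted average of the targets."""
        def __init__(self, sigma=0.5):
            self.sigma = sigma

        def fit(self, X, y):
            # "Training" is just memorization -- one pass, no iterations.
            self.X = np.asarray(X, float)
            self.y = np.asarray(y, float)
            return self

        def predict(self, Xq):
            Xq = np.atleast_2d(np.asarray(Xq, float))
            # Squared distances from each query to every training sample
            d2 = ((Xq[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
            w = np.exp(-d2 / (2 * self.sigma ** 2))
            return (w @ self.y) / w.sum(axis=1)

    # Toy stand-in: scalar feature -> "emission" value on a linear relation
    X = np.array([[0.0], [1.0], [2.0], [3.0]])
    y = np.array([0.0, 2.0, 4.0, 6.0])
    model = GRNN(sigma=0.3).fit(X, y)
    pred = model.predict([[1.5]])[0]  # interpolates between neighbors
    ```

    Because prediction is a weighted average of observed targets, a GRNN interpolates smoothly between training samples but cannot extrapolate beyond their range, which is one reason verification against an independent inventory such as ODIAC matters.
    
    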